Recently, Vectara released its 'Hallucination Leaderboard', a report comparing how often different large language models (LLMs) introduce hallucinations when summarizing short documents. The leaderboard, which is updated regularly, is scored with Vectara's Hughes Hallucination Evaluation Model (HHEM-2.1), which detects information in a summary that is not supported by the source document. For each of several popular models, the latest report lists four key metrics: hallucination rate, factual consistency rate, answer rate, and average summary length.
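To make these metrics concrete, here is a minimal sketch of how such figures could be aggregated from per-summary judgments. The `SummaryResult` record and the boolean `is_consistent` flag are hypothetical illustrations, not Vectara's actual pipeline; in the real leaderboard, HHEM assigns each (source, summary) pair a factual consistency score, and the rates below are computed over the responses a model actually produced.

```python
from dataclasses import dataclass

@dataclass
class SummaryResult:
    """One model response for one source document (hypothetical record)."""
    answered: bool       # did the model produce a summary at all?
    is_consistent: bool  # did an HHEM-style judge find the summary supported by the source?
    word_count: int      # length of the generated summary in words

def leaderboard_metrics(results: list[SummaryResult]) -> dict[str, float]:
    """Aggregate per-summary judgments into the four leaderboard metrics."""
    answered = [r for r in results if r.answered]
    consistent = sum(1 for r in answered if r.is_consistent)

    answer_rate = len(answered) / len(results)
    factual_consistency_rate = consistent / len(answered)
    return {
        # hallucination rate is simply the complement of factual consistency
        "hallucination_rate": 1.0 - factual_consistency_rate,
        "factual_consistency_rate": factual_consistency_rate,
        "answer_rate": answer_rate,
        "avg_summary_length": sum(r.word_count for r in answered) / len(answered),
    }

# Example: three attempts, one refusal, one summary flagged as hallucinated
results = [
    SummaryResult(answered=True,  is_consistent=True,  word_count=62),
    SummaryResult(answered=True,  is_consistent=False, word_count=58),
    SummaryResult(answered=False, is_consistent=False, word_count=0),
]
print(leaderboard_metrics(results))
```

Note that the length and consistency figures are averaged only over answered documents, which is why answer rate is reported alongside them: a model that refuses often can look artificially consistent on the summaries it does produce.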